Bayesian Networks
Review of Artificial Intelligence and Mobile Robotics: Case Studies of Successful Robot Systems
Today, mobile robotics is an increasingly important bridge between artificial intelligence and robotics. It is advancing the theory and practice of cooperative cognition, perception, and action, and it is serving to reunite planning techniques with sensing and real-world performance. Further, developments in mobile robotics can have important practical, economic, and military consequences. For some time now, amateurs, hobbyists, students, and researchers have had access to how-to books on the low-level mechanical and electronic aspects of mobile-robot construction (Everett 1995; McComb 1987). The famous Massachusetts Institute of Technology (MIT) 6.270 robot-building course has contributed course notes and hardware kits that are now available commercially and in the form of an influential book (Jones 1998; Jones and Flynn 1993).
PSINET: Assisting HIV Prevention Among Homeless Youth by Planning Ahead
Homeless youth are at high risk of human immunodeficiency virus (HIV) infection because of their engagement in high-risk behaviors such as unprotected sex and sex under the influence of drugs. Many nonprofit agencies conduct interventions to educate and train a select group of homeless youth about HIV prevention and treatment practices, and they rely on word-of-mouth spread of information through the youths' social network. Previous work on strategic selection of intervention participants does not handle uncertainties in the social network's structure and evolving network state, potentially causing significant shortcomings in the spread of information. Thus, we developed PSINET, a decision-support system to aid the agencies in this task. PSINET includes the following key novelties: (1) it handles uncertainties in network structure and evolving network state; (2) it addresses these uncertainties by using POMDPs in influence maximization; and (3) it provides algorithmic advances to allow high-quality approximate solutions for such POMDPs. Simulations show that PSINET achieves around 60 percent more information spread than the current state of the art.
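As a point of reference for the influence-maximization setting PSINET addresses, the sketch below shows the classical greedy baseline under an independent cascade model, not PSINET's POMDP solver; the toy friendship network, propagation probability, and budget are illustrative assumptions.

```python
import random

def simulate_cascade(graph, seeds, p=0.1, trials=200):
    """Estimate the expected spread of `seeds` under an independent cascade model."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr in graph.get(node, []):
                if nbr not in active and random.random() < p:
                    active.add(nbr)
                    frontier.append(nbr)
        total += len(active)
    return total / trials

def greedy_seed_selection(graph, budget):
    """Greedily add the node with the largest estimated marginal gain in spread."""
    seeds = []
    for _ in range(budget):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: simulate_cascade(graph, seeds + [n]))
        seeds.append(best)
    return seeds

# Toy friendship network among youth (hypothetical adjacency list).
network = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2], 4: [2]}
print(greedy_seed_selection(network, budget=2))
```

PSINET departs from this kind of baseline by treating participant selection as a POMDP, so that uncertainty about which friendships actually exist and how the network state evolves between interventions is folded into the selection policy itself.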
Sequential Decision Making in Computational Sustainability Through Adaptive Submodularity
Many problems in computational sustainability require making a sequence of decisions under uncertainty, and such problems are generally notoriously difficult. In this article, we review the recently discovered notion of adaptive submodularity, an intuitive diminishing-returns condition that generalizes the classical notion of submodular set functions to sequential decision problems. Problems exhibiting the adaptive submodularity property can be efficiently and provably near-optimally solved using simple myopic policies. We illustrate this concept in several case studies of interest in computational sustainability: First, we demonstrate how it can be used to efficiently plan for resolving uncertainty in adaptive management scenarios. Then, we show how it applies to dynamic conservation planning for protecting endangered species, a case study carried out in collaboration with the U.S. Geological Survey and the U.S. Fish and Wildlife Service.
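For readers who want the condition itself, here is a sketch of adaptive submodularity in notation standard in the literature (the symbols below are mine, not quoted from the abstract): Δ(e | ψ) denotes the expected marginal benefit of item e given the partial observations ψ made so far.

```latex
% Adaptive submodularity, sketched in standard notation (illustrative).
\[
  \Delta(e \mid \psi) \;=\;
  \mathbb{E}\!\left[ f\bigl(\mathrm{dom}(\psi)\cup\{e\},\,\Phi\bigr)
                   - f\bigl(\mathrm{dom}(\psi),\,\Phi\bigr)
     \;\middle|\; \Phi \sim \psi \right]
\]
\[
  f \text{ is adaptive submodular iff }\;
  \Delta(e \mid \psi) \;\ge\; \Delta(e \mid \psi')
  \quad \text{whenever } \psi \text{ is a subrealization of } \psi'.
\]
```

Together with adaptive monotonicity, this is the property that lets the simple myopic policy, which repeatedly picks the item maximizing Δ(e | ψ), come within a (1 - 1/e) factor of the optimal expected value under a cardinality constraint.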
Loan Prediction - Using PCA and Naive Bayes Classification with R
So it is important to predict the loan type and loan amount from the bank's customer data. In this blog post, we discuss how a Naive Bayes classification model built with R can be used to predict loans. Because the customer data contains more than two independent variables, it is difficult to plot directly; reducing it to two dimensions (with PCA) makes it much easier to visualize how the machine-learning model behaves.
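The post itself works in R; as a rough analogue, the following Python/scikit-learn sketch shows the same idea of projecting many customer attributes onto two principal components before fitting Naive Bayes. The file name and the loan_approved target column are hypothetical, and the features are assumed to be numeric.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical customer data with numeric features and a binary loan outcome.
df = pd.read_csv("customers.csv")
X = df.drop(columns=["loan_approved"])
y = df["loan_approved"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Reduce the many independent variables to two principal components so the
# decision surface can be plotted, then fit a Gaussian Naive Bayes classifier.
model = make_pipeline(StandardScaler(), PCA(n_components=2), GaussianNB())
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```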
Decision Making in Complex Multiagent Contexts: A Tale of Two Frameworks
Doshi, Prashant J. (University of Georgia)
Decision making is a key feature of autonomous systems. The physical context often includes other interacting autonomous systems, typically called agents. In this article, I focus on decision making in a multiagent context with partial information about the problem. I put the two frameworks, the decentralized partially observable Markov decision process (Dec-POMDP) and the interactive partially observable Markov decision process (I-POMDP), in context and review the foundational algorithms for these frameworks, while briefly discussing the advances in their specializations.
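For concreteness, the Dec-POMDP is usually written as the tuple below (commonly used notation, not quoted from the article); the I-POMDP keeps the same physical ingredients but gives each agent its own reward function and folds models of the other agents into an interactive state space over which the agent maintains beliefs.

```latex
% A Dec-POMDP for a set of agents, in commonly used notation (illustrative).
\[
  \langle I,\; S,\; \{A_i\},\; T,\; R,\; \{\Omega_i\},\; O,\; h \rangle
\]
% I: agents; S: states; A_i: actions of agent i;
% T(s' \mid s, \vec{a}): joint transition function;
% R(s, \vec{a}): single shared reward; \Omega_i: observations of agent i;
% O(\vec{\omega} \mid s', \vec{a}): joint observation function; h: horizon.
```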
Agent-Centered Search
In this article, I describe agent-centered search (also called real-time search or local search) and illustrate this planning paradigm with examples. Agent-centered search methods interleave planning and plan execution and restrict planning to the part of the domain around the current state of the agent, for example, the current location of a mobile robot or the current board position of a game. These methods can execute actions in the presence of time constraints and often have a small sum of planning and execution cost, both because they trade off planning and execution cost and because they allow agents to gather information early in nondeterministic domains, which reduces the amount of planning they have to perform for unencountered situations. Agent-centered search methods have been applied to a variety of domains, including traditional search, strips-type planning, moving-target search, planning with totally and partially observable Markov decision process models, reinforcement learning, constraint satisfaction, and robot navigation.
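As one concrete instance of this paradigm, the sketch below follows the LRTA*-style pattern of a one-step lookahead followed by an update of the current state's heuristic value; the grid world, start, and goal are illustrative assumptions, not taken from the article.

```python
# Minimal LRTA*-style agent-centered search on a 4-connected grid (illustrative).

def lrta_star(grid, start, goal, max_steps=1000):
    """Interleave local one-step planning with execution, learning h-values."""
    h = {}  # learned heuristic values, falling back to Manhattan distance

    def heuristic(cell):
        return h.get(cell, abs(cell[0] - goal[0]) + abs(cell[1] - goal[1]))

    def neighbors(cell):
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                yield (nr, nc)

    current, path = start, [start]
    for _ in range(max_steps):
        if current == goal:
            return path
        # Plan only around the current state: score successors by edge cost + h.
        best = min(neighbors(current), key=lambda n: 1 + heuristic(n))
        # Learn: raise the current state's h-value to the best lookahead value.
        h[current] = max(heuristic(current), 1 + heuristic(best))
        current = best
        path.append(current)
    return None  # step budget exhausted

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(lrta_star(grid, start=(0, 0), goal=(2, 0)))
```

Because planning is restricted to the neighborhood of the current state, each step is cheap, and the learned h-values keep the agent from looping forever in dead ends.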
Reports on the AAAI Fall Symposia
Giacomo, Giuseppe De, desJardins, Marie, Canamero, Dolores, Wasson, Glenn, Littman, Michael, Allwein, Gerard, Marriott, Kim, Meyer, Bernd, Webb, Barbara, Consi, Tom
The Association for the Advancement of Artificial Intelligence (AAAI) held its 1998 Fall Symposium Series on 23 to 25 October at the Omni Rosen Hotel in Orlando, Florida. This article contains summaries of seven of the symposia that were conducted: (1) Cognitive Robotics; (2) Distributed, Continual Planning; (3) Emotional and Intelligent: The Tangled Knot of Cognition; (4) Integrated Planning for Autonomous Agent Architectures; (5) Planning with Partially Observable Markov Decision Processes; (6) Reasoning with Visual and Diagrammatic Representations; and (7) Robotics and Biology: Developing Connections.
Inference in Bayesian Networks
A Bayesian network is a compact, expressive representation of uncertain relationships among parameters in a domain. In this article, I introduce basic methods for computing with Bayesian networks, starting with the simple idea of summing the probabilities of events of interest. The article introduces major current methods for exact computation, briefly surveys approximation methods, and closes with a brief discussion of open issues.
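To make the idea of summing the probabilities of events of interest concrete, here is a small sketch of inference by enumeration on a toy network in which Rain and Sprinkler are independent parents of WetGrass; the structure and all numbers are illustrative, not from the article.

```python
# Inference by enumeration on a toy Bayesian network (illustrative numbers).
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet_given = {  # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(rain, sprinkler, wet):
    """P(Rain=rain, Sprinkler=sprinkler, WetGrass=wet) from the factored model."""
    p_wet = P_wet_given[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_wet if wet else 1 - p_wet)

# P(Rain | WetGrass=True): sum the joint over the hidden variable (Sprinkler)
# for each value of Rain, then normalize.
unnormalized = {r: sum(joint(r, s, True) for s in (True, False)) for r in (True, False)}
total = sum(unnormalized.values())
print({r: p / total for r, p in unnormalized.items()})
```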
An Overview of Some Recent Developments in Bayesian Problem-Solving Techniques
The last few years have seen a surge in interest in the use of techniques from Bayesian decision theory to address problems in AI. Decision theory provides a normative framework for representing and reasoning about decision problems under uncertainty. The articles cover the topics of inference in Bayesian networks, decision-theoretic planning, and qualitative decision theory. Here, I provide a brief introduction to Bayesian networks and then cover applications of Bayesian problem-solving techniques, knowledge-based model construction and structured representations, and the learning of graphic probability models.
Bayesian Networks without Tears
I give an introduction to Bayesian networks for AI researchers with a limited grounding in probability theory. Indeed, it is probably fair to say that Bayesian networks are to a large segment of the AI-uncertainty community what resolution theorem proving is to the AI-logic community. Nevertheless, despite what seems to be their obvious importance, the ideas and techniques have not spread much beyond the research community responsible for them. I hope to rectify this situation by making Bayesian networks more accessible to the probabilistically unsophisticated.